For clinical text classification on small datasets, recent studies have confirmed that a fine-tuned multilayer perceptron outperforms other generative classifiers, including deep learning ones. To improve the performance of a neural network classifier, feature selection applied to the learned representation can be effective. However, most feature selection methods only estimate the degree of linear dependency between variables and select the best features based on univariate statistical tests; moreover, they ignore the sparsity of the feature space in which the learned representations live. Objective: We therefore aim to address sparsity through an alternative approach that compresses the clinical representation space, so that a limited set of French clinical notes can still be processed effectively. Methods: This study proposes an autoencoder learning algorithm that exploits the sparsity of clinical note representations. The motivation is to determine how sparse, high-dimensional data can be compressed by reducing the dimensionality of the clinical-note representation space; classification performance is then evaluated in the trained, compressed feature space. Results: The proposed approach yields an overall performance gain of up to 3% on every evaluation metric. The classifiers ultimately reach 92% accuracy, 91% recall, 91% precision, and a 91% F1 score in detecting patient conditions. In addition, the compression mechanism and the autoencoder's prediction process are explained through the information bottleneck theoretical framework.
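A minimal sketch of the idea, assuming PyTorch and illustrative layer sizes (the abstract does not specify the architecture): compress sparse, high-dimensional note vectors with an autoencoder, then train the classifier on the encoder's output rather than on the raw representation.

```python
import torch
import torch.nn as nn

class NoteAutoencoder(nn.Module):
    """Compresses sparse note vectors (e.g. TF-IDF) into a low-dimensional code."""
    def __init__(self, input_dim: int, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)            # compressed feature space
        return self.decoder(z), z      # reconstruction and code

def train_autoencoder(model, loader, epochs: int = 10, lr: float = 1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()             # reconstruction loss on note vectors
    for _ in range(epochs):
        for (x,) in loader:            # loader over a TensorDataset of vectors
            x_hat, _ = model(x)
            loss = loss_fn(x_hat, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

The downstream classifier would then be fit on `model.encoder(x)` outputs; in information-bottleneck terms, the code `z` is trained to keep what the reconstruction needs while discarding the rest.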
The objective of the research presented in this paper is to develop machine learning algorithms based on natural language processing for physician notes stored in the research data warehouse of CHU Sainte-Justine Hospital. First, word representation learning techniques are applied using bag-of-words (BoW), term frequency-inverse document frequency (TF-IDF), and neural word embeddings (Word2Vec). Each representation technique aims to preserve semantic and syntactic information in the critical care data; this enriches the mutual information carried by the word representations and benefits the subsequent analysis steps. Second, machine learning classifiers detect a patient's condition, heart failure or stable, from the word representation vector space built in the previous step. The approach relies on supervised binary classification algorithms: logistic regression (LR), Gaussian naive Bayes (GaussianNB), and a multilayer perceptron neural network (MLPNN). Technically, training the classifiers mainly amounts to minimizing the empirical loss. The resulting learning algorithms are assessed for high classification performance in terms of accuracy (ACC), precision (PRE), recall (REC), and F1 score (F1). The results show that the combination of TF-IDF and MLPNN consistently outperforms all other combinations in overall performance. Without any feature selection, the proposed framework achieves overall classification performance of 84%, 82%, 85%, and 83% for ACC, PRE, REC, and F1, respectively. Notably, with well-chosen feature selection, the overall performance ultimately improves by 4% on each evaluation metric.
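As a rough illustration of the best-performing combination (TF-IDF with an MLPNN), here is a scikit-learn sketch; the notes, labels, and hyperparameters below are invented placeholders, since the CHU Sainte-Justine data are not public.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Placeholder corpus: 1 = heart failure, 0 = stable.
notes = [
    "worsening dyspnea, elevated BNP, reduced ejection fraction",
    "stable vitals, routine post-operative follow-up",
    "pulmonary edema on chest x-ray, suspected cardiac failure",
    "afebrile, tolerating feeds, no respiratory distress",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),                        # TF-IDF features
    MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,      # MLPNN classifier
                  random_state=0),
)
clf.fit(notes, labels)
print(clf.predict(["new onset dyspnea and elevated BNP"]))
```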
Rapid advances in clinical data management systems and artificial intelligence methods have enabled the era of personalized medicine. Intensive care units (ICUs) are an ideal clinical research setting for such developments, as they collect large amounts of clinical data and are highly computerized environments. We designed a retrospective clinical study on a prospective ICU database, using clinical natural language to help with the early diagnosis of heart failure in critically ill children. The approach consists of empirical experiments with learning algorithms to learn hidden interpretations and representations of French clinical note data. The study includes the clinical notes of 1386 patients, amounting to 5444 lines of notes. There were 1941 positive cases (36% of the total) and 3503 negative cases, classified by independent physicians using a standardized approach. The multilayer perceptron neural network outperformed the other discriminative and generative classifiers. Consequently, the proposed framework achieved an overall classification performance of 89% accuracy, 88% recall, and 89% precision. This study successfully applied learned representations and machine learning algorithms to detect heart failure from clinical natural language in a single French-language institution. Further work is needed to apply the same approach at other institutions and in other languages.
In this paper, we propose a novel technique, INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while also capturing program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. It then determines that an APR-generated patch overfits if the patch (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. If our approach cannot determine from invariants whether a patch overfits, INVALIDATOR uses a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR leverages both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We conducted our experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classified 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of accuracy and F-measure, respectively.
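The semantic decision rule can be pictured with a schematic sketch (a simplification under assumptions, not INVALIDATOR's actual implementation), treating inferred likely invariants as sets of predicates:

```python
def is_overfitting(patch_invariants: set,
                   correct_spec: set,
                   buggy_error_invariants: set) -> bool:
    """Semantic check: reject the patch if it (1) violates invariants that a
    correct program must satisfy, or (2) preserves invariants that capture
    the buggy program's erroneous behavior."""
    violates_spec = not correct_spec.issubset(patch_invariants)
    keeps_error_behavior = bool(buggy_error_invariants & patch_invariants)
    return violates_spec or keeps_error_behavior
```

When neither rule fires, the syntax-based classifier trained on labeled patches takes over, as described above.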
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother (PaRIS) is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements, and comes with non-asymptotic bounds, convergence results, and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special cases. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
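Concretely, the quantities being smoothed are conditional expectations of additive functionals; in standard state-space notation (assumed here, not quoted from the paper) with states (X_t) and observations (Y_t):

```latex
% Smoothed expectation of an additive functional, the target of PaRIS/PPG:
\mathbb{E}\bigl[h_T(X_{0:T}) \mid Y_{0:T}\bigr],
\qquad
h_T(x_{0:T}) = \sum_{t=0}^{T-1} \tilde{h}_t(x_t, x_{t+1}).

% Such targets arise, e.g., in MLE via Fisher's identity: the score of the
% observed-data likelihood is itself a smoothed additive functional, since
% \log p_\theta(x_{0:T}, y_{0:T}) decomposes additively in a Markov model:
\nabla_\theta \log p_\theta(y_{0:T})
  = \mathbb{E}_\theta\bigl[\nabla_\theta \log p_\theta(X_{0:T}, y_{0:T}) \mid y_{0:T}\bigr].
```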
Machine reading comprehension has become one of the most advanced and popular research topics in natural language processing in recent years. Classifying question answerability is a significant sub-task in machine reading comprehension, yet it has received relatively little study. Retro-Reader is one of the studies that has addressed this problem effectively. However, the encoders of most traditional machine reading comprehension models in general, and of Retro-Reader in particular, cannot fully exploit the contextual semantic information of the context. Inspired by SemBERT, we use semantic role labels from the SRL task to add semantics to pre-trained language models such as mBERT, XLM-R, and PhoBERT. This experiment was conducted to compare the influence of semantics on answerability classification for Vietnamese machine reading comprehension. We also expect this experiment to enhance the encoder of the Retro-Reader model's Sketchy Reading Module. The semantics-enhanced Retro-Reader encoder was applied to the Vietnamese machine reading comprehension task for the first time and obtained positive results.
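A minimal sketch of the SemBERT-style fusion described above, assuming a Hugging Face encoder and one SRL tag id per subword token (the fusion layer and dimensions are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn

class SemanticsEnhancedEncoder(nn.Module):
    def __init__(self, encoder, num_srl_tags: int, srl_dim: int = 32):
        super().__init__()
        self.encoder = encoder                    # e.g. mBERT, XLM-R, PhoBERT
        self.srl_embed = nn.Embedding(num_srl_tags, srl_dim)
        hidden = encoder.config.hidden_size
        self.fuse = nn.Linear(hidden + srl_dim, hidden)

    def forward(self, input_ids, attention_mask, srl_tag_ids):
        tokens = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                       # (batch, seq, hidden)
        srl = self.srl_embed(srl_tag_ids)         # (batch, seq, srl_dim)
        return self.fuse(torch.cat([tokens, srl], dim=-1))
```

The fused token representations would then feed the answerability classifier in place of the plain encoder outputs.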
Diabetic retinopathy (DR) is a leading cause of vision loss worldwide, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations using the Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach is robust to a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
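Of the ingredients listed, the entropy-minimization term is the easiest to make concrete; a sketch under standard assumptions (not the paper's exact loss) penalizes uncertain predictions on unlabeled target images so that decision boundaries avoid dense data regions:

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the predicted class distribution."""
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=-1).mean()
```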
Unbiased learning to rank (ULTR) studies the problem of mitigating various biases in implicit user feedback data such as clicks, and has been receiving considerable attention recently. A popular ULTR approach for real-world applications uses a two-tower architecture, where click modeling is factorized into a relevance tower with regular input features and a bias tower with bias-relevant inputs such as the position of a document. A successful factorization allows the relevance tower to be exempt from biases. In this work, we identify a critical issue that existing ULTR methods have ignored: the bias tower can be confounded with the relevance tower via the underlying true relevance. In particular, the positions were determined by the logging policy, i.e., the previous production model, which possesses relevance information. We give both theoretical analysis and empirical results showing the negative effects of such a correlation on the relevance tower. We then propose three methods to mitigate the negative confounding effects by better disentangling relevance and bias. Empirical results on both controlled public datasets and a large-scale industry dataset show the effectiveness of the proposed approaches.
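The factorization in question is commonly written as an additive two-tower model; the sketch below (a standard formulation with invented feature dimensions, not the paper's production model) makes the confounding concern concrete: the bias tower's input, position, was itself chosen by a relevance-aware logging policy.

```python
import torch
import torch.nn as nn

class TwoTowerClickModel(nn.Module):
    def __init__(self, rel_dim: int, bias_dim: int, hidden: int = 64):
        super().__init__()
        self.relevance_tower = nn.Sequential(
            nn.Linear(rel_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.bias_tower = nn.Sequential(
            nn.Linear(bias_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, rel_feats, bias_feats):
        # Click logit = relevance + presentation bias; at serving time only
        # the relevance tower is used to rank documents.
        return self.relevance_tower(rel_feats) + self.bias_tower(bias_feats)
```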
Privacy of machine learning models is one of the remaining challenges that hinder the broad adoption of Artificial Intelligence (AI). This paper considers this problem in the context of image datasets containing faces. Anonymization of such datasets is becoming increasingly important due to their central role in the training of autonomous cars, for example, and the vast amount of data generated by surveillance systems. While most prior work de-identifies facial images by modifying identity features in pixel space, we instead project the image onto the latent space of a Generative Adversarial Network (GAN) model, find the features that provide the biggest identity disentanglement, and then manipulate these features in latent space, pixel space, or both. The main contribution of the paper is the design of a feature-preserving anonymization framework, StyleID, which protects the individuals' identity while preserving as many characteristics of the original faces in the image dataset as possible. As part of the contribution, we present a novel disentanglement metric, three complementary disentanglement methods, and new insights into identity disentanglement. StyleID provides tunable privacy, has low computational complexity, and is shown to outperform current state-of-the-art solutions.
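An illustrative sketch of the recipe the abstract outlines (not StyleID's published code; `encoder`, `generator`, and `identity_dirs` stand in for trained components): invert the face into the GAN's latent space, then re-randomize its projection onto the latent directions found to carry the most identity information.

```python
import torch

def anonymize_face(image, encoder, generator, identity_dirs, strength=2.0):
    w = encoder(image)                           # GAN inversion into latent space
    for d in identity_dirs:                      # most identity-disentangled dirs
        d = d / d.norm()
        w = w - (w @ d) * d                      # strip the identity component
        w = w + strength * torch.randn(()) * d   # replace it with a random one
    return generator(w)                          # synthesize the anonymized face
```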
Neural compression offers a domain-agnostic approach to creating codecs for lossy or lossless compression via deep generative models. For sequence compression, however, most deep sequence models have costs that scale with the sequence length rather than the sequence complexity. In this work, we instead treat data sequences as observations from an underlying continuous-time process and learn how to efficiently discretize while retaining information about the full sequence. As a consequence of decoupling sequential information from its temporal discretization, our approach allows for greater compression rates and smaller computational complexity. Moreover, the continuous-time approach naturally allows us to decode at different time intervals. We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that our approach automatically achieves reductions in bit rates by learning how to discretize.